Learning Bayesian network parameters under order constraints
Authors
Abstract
We consider the problem of learning the parameters of a Bayesian network from data, while taking into account prior knowledge about the signs of influences between variables. Such prior knowledge can be readily obtained from domain experts. We show that this problem of parameter learning is a special case of isotonic regression and provide a simple algorithm for computing isotonic estimates. Ou...
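The abstract's claim that order-constrained parameter learning reduces to isotonic regression can be illustrated with a small sketch. Below, a binary child Y with a single ordinal parent X is assumed, together with a positive sign-of-influence constraint, so the estimates of P(Y=1 | X=x) must be non-decreasing in x; the constrained estimates are obtained by weighted isotonic regression of the empirical frequencies via pool-adjacent-violators. The setup, function name, and toy counts are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: ML estimation of P(Y=1 | X=x) for a binary child Y with one
# ordinal parent X under a positive-influence (order) constraint, i.e. the
# estimates must be non-decreasing in x. The unconstrained ML estimates are
# the empirical fractions; the constrained estimates are their weighted
# isotonic regression, computed with pool-adjacent-violators (PAVA).

def isotonic_pava(values, weights):
    """Weighted non-decreasing isotonic regression via pool-adjacent-violators."""
    # Each block stores [weighted mean, total weight, number of merged points].
    blocks = []
    for v, w in zip(values, weights):
        blocks.append([v, w, 1])
        # Merge backwards while adjacent blocks violate monotonicity.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, n2 = blocks.pop()
            v1, w1, n1 = blocks.pop()
            total = w1 + w2
            blocks.append([(v1 * w1 + v2 * w2) / total, total, n1 + n2])
    # Expand block means back to one fitted value per input point.
    fitted = []
    for mean, _, n in blocks:
        fitted.extend([mean] * n)
    return fitted

# Toy counts (#{Y=0, X=x}, #{Y=1, X=x}) for parent states x = 0, 1, 2.
counts = [(2, 8), (6, 4), (3, 7)]
totals = [n0 + n1 for n0, n1 in counts]
raw = [n1 / t for (_, n1), t in zip(counts, totals)]  # unconstrained: 0.8, 0.4, 0.7
iso = isotonic_pava(raw, totals)                      # order-constrained: 0.6, 0.6, 0.7

print("unconstrained ML :", [round(p, 3) for p in raw])
print("isotonic (PAVA)  :", [round(p, 3) for p in iso])
```

For a negative influence, the same routine can be applied to the reversed parent ordering.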
Similar resources
Learning Bayesian network parameters under equivalence constraints
We propose a principled approach for learning parameters in Bayesian networks from incomplete datasets, where the examples of a dataset are subject to equivalence constraints. These equivalence constraints arise from datasets where examples are tied together, in that we may not know the value of a particular variable, but whatever that value is, we know it must be the same across different exam...
Parameter Learning of Bayesian Network Classifiers Under Computational Constraints
We consider online learning of Bayesian network classifiers (BNCs) with reduced-precision parameters, i.e. the conditional-probability tables parameterizing the BNCs are represented by low bit-width fixed-point numbers (a brief illustration of this representation follows the list of similar resources below). In contrast to previous work, we analyze the learning of these parameters using reduced-precision arithmetic only, which is important for computationally constrained platforms, e....
Learning Bayesian network parameters under incomplete data with domain knowledge
Bayesian networks have gained increasing attention in recent years. One key issue in Bayesian networks (BNs) is parameter learning. When training data is incomplete or sparse, or when multiple hidden nodes exist, learning BN parameters becomes extremely difficult. Under these circumstances, the learning algorithms are required to operate in a high-dimensional search space...
Learning Bayesian Networks under Equivalence Constraints (Abstract)
Machine learning tasks typically assume that the examples of a given dataset are independent and identically distributed (i.i.d.). Yet, there are many domains and applications where this assumption does not strictly hold. Further, there may be additional information available that ties together the examples of a dataset, which we could exploit to learn more accurate models. For example, there a...
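As noted above, here is a minimal sketch of the reduced-precision idea from the "Parameter Learning of Bayesian Network Classifiers Under Computational Constraints" entry: conditional probabilities stored as unsigned fixed-point integers with a small number of fractional bits. The bit-width, helper names, and toy CPT row are illustrative assumptions, not the scheme from that paper.

```python
# Minimal sketch: storing conditional-probability-table (CPT) entries as
# unsigned fixed-point integers with FRACTIONAL_BITS fractional bits, and
# measuring the rounding error introduced by the reduced precision.
# The bit-width and the toy CPT row are illustrative assumptions.

FRACTIONAL_BITS = 8              # assumed bit-width; not taken from the paper
SCALE = 1 << FRACTIONAL_BITS

def to_fixed(p):
    """Quantize a probability in [0, 1] to a FRACTIONAL_BITS-bit integer code."""
    return min(SCALE - 1, max(0, round(p * SCALE)))

def to_float(q):
    """Map a fixed-point code back to an approximate probability."""
    return q / SCALE

row = [0.1, 0.3, 0.6]                       # toy CPT row P(Y | X=x)
codes = [to_fixed(p) for p in row]          # e.g. [26, 77, 154]
approx = [to_float(q) for q in codes]

print("fixed-point codes:", codes)
print("recovered values :", [round(p, 4) for p in approx])
print("rounding error   :", [round(a - p, 4) for p, a in zip(row, approx)])
```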
Journal
Journal title: International Journal of Approximate Reasoning
Year: 2006
ISSN: 0888-613X
DOI: 10.1016/j.ijar.2005.10.003